In statistics, the bias of an estimator (or bias function) is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called unbiased. In statistics, "bias" is a property of an estimator. Bias is a distinct concept from consistency: consistent estimators converge in probability to the true value of the parameter, but may be biased or unbiased (see bias versus consistency for more).
All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because a biased estimator may be unbiased with respect to different measures of central tendency; because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful.
Bias can also be measured with respect to the median, rather than the mean (expected value), in which case one distinguishes median-unbiasedness from the usual mean-unbiasedness property. Mean-unbiasedness is not preserved under non-linear transformations, though median-unbiasedness is (see the discussion of median-unbiased estimators below); for example, the sample variance is a biased estimator for the population variance. These are all illustrated below.
An unbiased estimator for a parameter need not always exist. For example, there is no unbiased estimator for the reciprocal of the parameter of a binomial random variable.
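One standard way to see this (a sketch of the usual argument, not spelled out in the text above): if X follows a Binomial(n, p) distribution, the expectation of any estimator δ(X) is
\[ \operatorname{E}_p[\delta(X)] = \sum_{k=0}^{n} \delta(k) \binom{n}{k} p^k (1-p)^{n-k}, \]
a polynomial in p of degree at most n, which remains bounded as p → 0; since 1/p is unbounded there, no choice of δ can make this expectation equal to 1/p for every p.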
The bias of an estimator θ̂ relative to the parameter θ being estimated is defined as
\[ \operatorname{Bias}(\hat\theta, \theta) = \operatorname{E}_{x \mid \theta}\!\left[\hat\theta\right] - \theta = \operatorname{E}_{x \mid \theta}\!\left[\hat\theta - \theta\right], \]
where E_{x|θ} denotes expected value over the distribution P(x | θ) (i.e., averaging over all possible observations x). The second equality follows since θ is measurable with respect to the conditional distribution P(x | θ).
An estimator is said to be unbiased if its bias is zero for all values of the parameter θ, or equivalently, if the expected value of the estimator matches the true value of the parameter. Unbiasedness is not guaranteed to carry over to transformations of the estimator: if θ̂ is an unbiased estimator for the parameter θ, it is not guaranteed in general that g(θ̂) is an unbiased estimator for g(θ), unless g is a linear function.
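As a quick numerical illustration of this point (a minimal sketch; the Normal population, the sample size and the transformations g are arbitrary choices, not taken from the text):

    import numpy as np

    rng = np.random.default_rng(0)
    mu, n, reps = 2.0, 5, 500_000                # illustrative values only
    x = rng.normal(mu, 1.0, size=(reps, n))
    xbar = x.mean(axis=1)                        # sample mean, unbiased for mu

    print(xbar.mean())                           # ≈ 2.0 = mu
    print((xbar ** 2).mean())                    # ≈ 4.2 = mu**2 + 1/n: g(x) = x**2 is non-linear, bias appears
    print((3.0 * xbar + 1.0).mean())             # ≈ 7.0 = 3*mu + 1: linear g preserves unbiasedness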
In a simulation experiment concerning the properties of an estimator, the bias of the estimator may be assessed using the mean signed difference.
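For instance (a hedged sketch: the Uniform(0, θ) model, the sample maximum as the estimator under study, and the particular values of θ, n and the seed are all assumptions made for illustration), the mean signed difference approximates the bias as follows:

    import numpy as np

    rng = np.random.default_rng(1)
    theta, n, reps = 1.0, 10, 200_000            # hypothetical true parameter, sample size, replications
    samples = rng.uniform(0.0, theta, size=(reps, n))
    estimates = samples.max(axis=1)              # sample maximum as an estimator of theta

    msd = np.mean(estimates - theta)             # mean signed difference between estimates and the true value
    print(msd)                                   # ≈ -theta/(n + 1) ≈ -0.091, the estimator's (negative) bias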
Suppose X₁, ..., Xₙ are independent and identically distributed (i.i.d.) random variables with expected value μ and variance σ². If the sample mean and uncorrected sample variance are defined as
\[ \overline{X} = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad S^2 = \frac{1}{n}\sum_{i=1}^{n} \left(X_i - \overline{X}\right)^2, \]
then S² is a biased estimator of σ². This follows from the law of total variance, because
\[ \operatorname{E}\!\left[S^2\right] = \sigma^2 - \operatorname{Var}\!\left(\overline{X}\right) = \sigma^2 - \frac{1}{n}\sigma^2 = \frac{n-1}{n}\sigma^2 \neq \sigma^2. \]
In other words, the expected value of the uncorrected sample variance does not equal the population variance σ², unless multiplied by a normalization factor. The ratio n/(n − 1) between the unbiased and biased (uncorrected) estimates of the variance is known as Bessel's correction. The sample mean, on the other hand, is an unbiased estimator of the population mean μ. The equality of the second term on the right-hand side in the equation above can be understood in terms of Bienaymé's identity,
\[ \operatorname{Var}\!\left(\overline{X}\right) = \operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n^2}\sum_{i=1}^{n} \operatorname{Var}(X_i) = \frac{\sigma^2}{n}. \]
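The bias of the uncorrected sample variance, and the unbiasedness of the sample mean, can be checked by simulation; in this sketch the population parameters, the sample size and the seed are arbitrary, and NumPy's ddof argument selects division by n (uncorrected) or by n − 1 (Bessel-corrected):

    import numpy as np

    rng = np.random.default_rng(2)
    mu, sigma2, n, reps = 0.0, 1.0, 5, 200_000
    x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))

    s2_uncorrected = x.var(axis=1, ddof=0)       # divides by n
    s2_corrected = x.var(axis=1, ddof=1)         # divides by n - 1 (Bessel's correction)

    print(s2_uncorrected.mean())                 # ≈ (n - 1)/n * sigma2 = 0.8, biased low
    print(s2_corrected.mean())                   # ≈ sigma2 = 1.0, unbiased
    print(x.mean(axis=1).mean())                 # ≈ mu = 0.0, the sample mean is unbiased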
The reason that an uncorrected sample variance, S², is biased stems from the fact that the sample mean is an ordinary least squares (OLS) estimator for μ: X̄ is the number that makes the sum Σᵢ (Xᵢ − X̄)² as small as possible. That is, when any other number is plugged into this sum, the sum can only increase. In particular, the choice μ ≠ X̄ gives
\[ \frac{1}{n}\sum_{i=1}^{n}\left(X_i - \overline{X}\right)^2 < \frac{1}{n}\sum_{i=1}^{n}\left(X_i - \mu\right)^2, \]
and then
\[ \operatorname{E}\!\left[S^2\right] < \operatorname{E}\!\left[\frac{1}{n}\sum_{i=1}^{n}\left(X_i - \mu\right)^2\right] = \sigma^2. \]
The above discussion can be understood in geometric terms: the vector C = (X₁ − μ, ..., Xₙ − μ) can be decomposed into the "mean part" and "variance part" by projecting to the direction of u = (1, ..., 1) and to that direction's orthogonal complement hyperplane. One gets A = (X̄ − μ, ..., X̄ − μ) for the part along u and B = (X₁ − X̄, ..., Xₙ − X̄) for the complementary part. Since this is an orthogonal decomposition, the Pythagorean theorem says |C|² = |A|² + |B|², and taking expectations we get nσ² = n E[(X̄ − μ)²] + n E[S²], as above (but times n). If the distribution of C is rotationally symmetric, as in the case when the Xᵢ are sampled from a Gaussian, then on average the dimension along u contributes to |C|² equally as the n − 1 directions perpendicular to u, so that E[(X̄ − μ)²] = σ²/n and E[S²] = (n − 1)σ²/n. This is in fact true in general, as explained above.
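The decomposition can be verified numerically on a single sample; this sketch (with an arbitrary Gaussian sample and a known μ) checks the Pythagorean identity and the squared lengths of the two parts named above:

    import numpy as np

    rng = np.random.default_rng(3)
    n, mu = 5, 2.0                               # arbitrary sample size and known mean
    x = rng.normal(mu, 1.5, size=n)

    c = x - mu                                   # centred data vector C
    u = np.ones(n) / np.sqrt(n)                  # unit vector along (1, ..., 1)
    a = (c @ u) * u                              # "mean part" A: projection onto u
    b = c - a                                    # "variance part" B: orthogonal complement

    print(np.isclose(c @ c, a @ a + b @ b))      # Pythagorean identity: True
    print(a @ a, n * (x.mean() - mu) ** 2)       # equal: squared length of the mean part
    print(b @ b, ((x - x.mean()) ** 2).sum())    # equal: squared length of the variance part, n * S^2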
Suppose that X has a Poisson distribution with expectation λ, and that the quantity to be estimated is
\[ \operatorname{P}(X = 0)^2 = e^{-2\lambda}, \]
with a sample of size 1. (For example, when incoming calls at a telephone switchboard are modeled as a Poisson process, and λ is the average number of calls per minute, then e^(−2λ) (the estimand) is the probability that no calls arrive in the next two minutes.)
Since the expectation of an unbiased estimator δ(X) is equal to the estimand, i.e.
\[ \operatorname{E}[\delta(X)] = \sum_{x=0}^{\infty} \delta(x) \frac{\lambda^x e^{-\lambda}}{x!} = e^{-2\lambda}, \]
the only function of the data constituting an unbiased estimator is
\[ \delta(X) = (-1)^X. \]
To see this, note that when factoring e^(−λ) out of the above expression for the expectation, the sum that is left is a Taylor series expansion of e^(−λ) as well, yielding e^(−λ) · e^(−λ) = e^(−2λ) (see Characterizations of the exponential function).
If the observed value of X is 100, then the estimate is 1, although the true value of the quantity being estimated is very likely to be near 0, which is the opposite extreme. And, if X is observed to be 101, then the estimate is even more absurd: It is −1, although the quantity being estimated must be positive.
The (biased) maximum likelihood estimator
\[ e^{-2X} \]
is far better than this unbiased estimator. Not only is its value always positive, but it is also more accurate in the sense that its mean squared error
\[ e^{-4\lambda} - 2e^{\lambda\left(e^{-2} - 3\right)} + e^{\lambda\left(e^{-4} - 1\right)} \]
is smaller; compare the unbiased estimator's MSE of
\[ 1 - e^{-4\lambda}. \]
The MSEs are functions of the true value λ. The bias of the maximum-likelihood estimator is
\[ e^{\lambda\left(e^{-2} - 1\right)} - e^{-2\lambda}. \]
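A simulation makes the comparison concrete (a sketch: the value of λ, the number of replications and the seed are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(4)
    lam, reps = 0.6, 1_000_000                   # illustrative true rate and number of replications
    x = rng.poisson(lam, size=reps)              # a single Poisson observation per replication

    target = np.exp(-2.0 * lam)                  # the estimand e^(-2*lambda)
    unbiased = (-1.0) ** x                       # the only unbiased estimator, (-1)^X
    mle = np.exp(-2.0 * x)                       # the biased maximum-likelihood estimator e^(-2X)

    for name, est in [("unbiased", unbiased), ("MLE", mle)]:
        print(name, "bias ≈", est.mean() - target, "MSE ≈", np.mean((est - target) ** 2))
    # The unbiased estimator has essentially zero bias but a far larger mean squared error.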
Further properties of median-unbiased estimators have been noted by Lehmann, Birnbaum, van der Vaart and Pfanzagl. In particular, median-unbiased estimators exist in cases where mean-unbiased and maximum-likelihood estimators do not exist. They are invariant under one-to-one transformations.
There are methods of constructing median-unbiased estimators for probability distributions that have monotone likelihood functions, such as one-parameter exponential families, to ensure that they are optimal (in a sense analogous to the minimum-variance property considered for mean-unbiased estimators). One such procedure is an analogue of the Rao–Blackwell procedure for mean-unbiased estimators: the procedure holds for a smaller class of probability distributions than does the Rao–Blackwell procedure for mean-unbiased estimation, but for a larger class of loss functions.
When the parameter is a vector, an analogous decomposition applies:
\[ \operatorname{MSE}\!\left(\hat\theta\right) = \operatorname{trace}\!\left(\operatorname{Cov}\!\left(\hat\theta\right)\right) + \left\lVert \operatorname{Bias}\!\left(\hat\theta, \theta\right) \right\rVert^2, \]
where trace(Cov(θ̂)) is the trace (diagonal sum) of the covariance matrix of the estimator and ‖Bias(θ̂, θ)‖² is the square vector norm.
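A numerical check of this decomposition (a hedged sketch: the two-dimensional parameter, the shrinkage factor used to create a deliberately biased estimator, and the simulation sizes are all arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(5)
    theta = np.array([1.0, -2.0])                # illustrative true parameter vector
    n, reps = 20, 100_000

    samples = rng.normal(theta, 1.0, size=(reps, n, 2))
    theta_hat = 0.9 * samples.mean(axis=1)       # sample mean shrunk toward zero: deliberately biased

    mse = np.mean(np.sum((theta_hat - theta) ** 2, axis=1))
    bias = theta_hat.mean(axis=0) - theta
    cov = np.cov(theta_hat, rowvar=False)
    print(mse, np.trace(cov) + np.sum(bias ** 2))   # the two numbers agree (up to simulation error)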
Suppose an estimator of the form
\[ T^2 = c \sum_{i=1}^{n} \left(X_i - \overline{X}\right)^2 = c n S^2 \]
is sought for the population variance as above, but this time to minimise the MSE:
\[ \operatorname{MSE} = \operatorname{E}\!\left[\left(T^2 - \sigma^2\right)^2\right] = \left(\operatorname{E}\!\left[T^2\right] - \sigma^2\right)^2 + \operatorname{Var}\!\left(T^2\right). \]
If the variables X₁, ..., Xₙ follow a normal distribution, then nS²/σ² has a chi-squared distribution with n − 1 degrees of freedom, giving
\[ \operatorname{E}\!\left[n S^2\right] = (n-1)\sigma^2 \quad\text{and}\quad \operatorname{Var}\!\left(n S^2\right) = 2(n-1)\sigma^4, \]
and so
\[ \operatorname{MSE} = \left(c(n-1) - 1\right)^2 \sigma^4 + 2c^2(n-1)\sigma^4. \]
With a little algebra it can be confirmed that it is c = 1/(n + 1) which minimises this combined loss function, rather than c = 1/(n − 1) which minimises just the square of the bias.
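This can also be confirmed by simulation; in this sketch (Normal data with an arbitrary true variance, sample size and seed) the choices c = 1/(n − 1), 1/n and 1/(n + 1) are compared on bias and mean squared error:

    import numpy as np

    rng = np.random.default_rng(6)
    n, reps, sigma2 = 10, 200_000, 4.0           # illustrative values only
    x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
    ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)   # n * S^2

    for c in [1 / (n - 1), 1 / n, 1 / (n + 1)]:
        est = c * ss
        print(f"c = {c:.4f}  bias ≈ {est.mean() - sigma2:+.4f}  MSE ≈ {np.mean((est - sigma2) ** 2):.4f}")
    # c = 1/(n - 1) is unbiased, but c = 1/(n + 1) attains the smallest mean squared error.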
More generally it is only in restricted classes of problems that there will be an estimator that minimises the MSE independently of the parameter values.
However, it is very common for there to be a bias–variance tradeoff, such that a small increase in bias can be traded for a larger decrease in variance, resulting in a more desirable estimator overall.
Fundamentally, the difference between the Bayesian approach and the sampling-theory approach above is that in the sampling-theory approach the parameter is taken as fixed, and then probability distributions of a statistic are considered, based on the predicted sampling distribution of the data. For a Bayesian, however, it is the data which are known and fixed, and it is the unknown parameter for which an attempt is made to construct a probability distribution, using Bayes' theorem:
\[ p(\theta \mid D, I) \propto p(\theta \mid I)\, p(D \mid \theta, I). \]
Here the second term, the likelihood of the data given the unknown parameter value θ, depends just on the data obtained and the modelling of the data generation process. However a Bayesian calculation also includes the first term, the prior probability for θ, which takes account of everything the analyst may know or suspect about θ before the data comes in. This information plays no part in the sampling-theory approach; indeed any attempt to include it would be considered "bias" away from what was pointed to purely by the data. To the extent that Bayesian calculations include prior information, it is therefore essentially inevitable that their results will not be "unbiased" in sampling theory terms.
But the results of a Bayesian approach can differ from the sampling theory approach even if the Bayesian tries to adopt an "uninformative" prior.
For example, consider again the estimation of an unknown population variance σ² of a Normal distribution with unknown mean, where it is desired to optimise c in the expected loss function
\[ \operatorname{ExpectedLoss} = \operatorname{E}\!\left[\left(c n S^2 - \sigma^2\right)^2\right] = \operatorname{E}\!\left[\sigma^4 \left(c n \frac{S^2}{\sigma^2} - 1\right)^{\!2}\right]. \]
A standard choice of uninformative prior for this problem is the Jeffreys prior, p(σ²) ∝ 1/σ², which is equivalent to adopting a rescaling-invariant flat prior for ln(σ²).
One consequence of adopting this prior is that S²/σ² remains a pivotal quantity, i.e. the probability distribution of S²/σ² depends only on the value of S²/σ², independent of the value of S² or σ²:
\[ p\!\left(\tfrac{S^2}{\sigma^2} \,\Big|\, S^2\right) = p\!\left(\tfrac{S^2}{\sigma^2} \,\Big|\, \sigma^2\right) = g\!\left(\tfrac{S^2}{\sigma^2}\right). \]
However, while
\[ \operatorname{E}_{S^2 \mid \sigma^2}\!\left[\sigma^4\left(c n \tfrac{S^2}{\sigma^2} - 1\right)^{\!2}\right] = \sigma^4\, \operatorname{E}_{S^2 \mid \sigma^2}\!\left[\left(c n \tfrac{S^2}{\sigma^2} - 1\right)^{\!2}\right], \]
in contrast
\[ \operatorname{E}_{\sigma^2 \mid S^2}\!\left[\sigma^4\left(c n \tfrac{S^2}{\sigma^2} - 1\right)^{\!2}\right] \neq \sigma^4\, \operatorname{E}_{\sigma^2 \mid S^2}\!\left[\left(c n \tfrac{S^2}{\sigma^2} - 1\right)^{\!2}\right] \]
— when the expectation is taken over the probability distribution of σ² given S², as it is in the Bayesian case, rather than S² given σ², one can no longer take σ⁴ as a constant and factor it out. The consequence of this is that, compared to the sampling-theory calculation, the Bayesian calculation puts more weight on larger values of σ², properly taking into account (as the sampling-theory calculation cannot) that under this squared-loss function the consequence of underestimating large values of σ² is more costly in squared-loss terms than that of overestimating small values of σ².
The worked-out Bayesian calculation gives a scaled inverse chi-squared distribution with n − 1 degrees of freedom for the posterior probability distribution of σ². The expected loss is minimised when cnS² = ⟨σ²⟩, the posterior mean of σ²; this occurs when c = 1/(n − 3).
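A brief sketch of where c = 1/(n − 3) comes from (assuming, in addition to the Jeffreys prior above, a flat prior on the unknown mean, so that the marginal posterior of σ² is the scaled inverse chi-squared distribution just described):
\[ p\!\left(\sigma^2 \mid S^2\right) \propto \left(\sigma^2\right)^{-\left(\frac{n-1}{2}+1\right)} \exp\!\left(-\frac{n S^2}{2\sigma^2}\right) \quad\Longrightarrow\quad \operatorname{E}\!\left[\sigma^2 \mid S^2\right] = \frac{n S^2}{n-3} \quad (n > 3). \]
Minimising E[(cnS² − σ²)² | S²] over c sets cnS² equal to this posterior mean, which gives c = 1/(n − 3).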
Even with an uninformative prior, therefore, a Bayesian calculation may not give the same expected-loss minimising result as the corresponding sampling-theory calculation.